4 research outputs found

    Requirements engineering for explainable systems

    Information systems are ubiquitous in modern life and are powered by ever more complex algorithms that are often difficult to understand. Moreover, since these systems are part of almost every aspect of human life, the quality of interaction and communication between humans and machines has become increasingly important. Explainability is therefore an essential element of human-machine communication, and it has also become an important quality requirement for modern information systems. However, dealing with quality requirements has never been a trivial task. To develop quality systems, software professionals have to understand how to transform abstract quality goals into real-world information system solutions. Requirements engineering provides a structured approach that aids software professionals in better comprehending, evaluating, and operationalizing quality requirements. Explainability has recently regained prominence and has been acknowledged and established as a quality requirement; however, there are currently no requirements engineering recommendations specifically focused on explainable systems. To fill this gap, this thesis investigates explainability as a quality requirement and how it relates to the information systems context, with an emphasis on requirements engineering. To this end, it proposes two theories that delineate the role of explainability and establish guidelines for the requirements engineering process of explainable systems. These theories are modeled and shaped through five artifacts. Together, the theories and artifacts should help software professionals 1) communicate and achieve a shared understanding of the concept of explainability; 2) comprehend how explainability affects system quality and what role it plays; 3) translate abstract quality goals into design and evaluation strategies; and 4) shape the software development process for the development of explainable systems. The theories and artifacts were built and evaluated through literature studies, workshops, interviews, and a case study. The findings show that the knowledge made available helps practitioners better understand the concept of explainability, facilitating the creation of explainable systems. These results suggest that the proposed theories and artifacts are plausible and practical, serving as a strong starting point for further extensions and improvements in the search for high-quality explainable systems.

    Explainability as a non-functional requirement: challenges and recommendations

    Software systems are becoming increasingly complex. Their ubiquitous presence makes users more dependent on their correctness in many aspects of daily life. As a result, there is a growing need to make software systems and their decisions more comprehensible, with more transparency in software-based decision making. Transparency is therefore becoming increasingly important as a non-functional requirement. However, the abstract quality aspect of transparency needs to be better understood and related to mechanisms that can foster it. The integration of explanations into software has often been discussed as a solution to mitigate system opacity. Yet, an important first step is to understand user requirements in terms of explainable software behavior: Are users really interested in software transparency, and are explanations considered an appropriate way to achieve it? We conducted a survey with 107 end users to assess their opinions on the current level of transparency in software systems and what they consider to be the main advantages and disadvantages of embedded explanations. We assess the relationship between explanations and transparency and analyze their potential impact on software quality. As explainability has become an important issue, researchers and professionals have been discussing how to deal with it in practice. While there are differences of opinion on the need for built-in explanations, understanding this concept and its impact on software is a key step for requirements engineering. Based on our research results and on the study of existing literature, we offer recommendations for the elicitation and analysis of explainability and discuss strategies for practice.

    Explainable software systems: from requirements analysis to system evaluation

    The growing complexity of software systems and the influence of software-supported decisions in our society have sparked the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. Accordingly, software engineers need means to assist them in incorporating this NFR into systems. This requires an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. However, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of artifacts that support the requirements engineering process and system design. In this work, we remedy this deficit by proposing four artifacts: a definition of explainability, a conceptual model, a knowledge catalogue, and a reference model for explainable systems. These artifacts should support software and requirements engineers in understanding the definition of explainability and how it interacts with other quality aspects. Beyond that, they may be considered a starting point for providing practical value in the refinement of explainability from high-level requirements to concrete design choices, as well as in the identification of methods and metrics for evaluating the implemented requirements.

    Welcome to the First International Workshop on Requirements Engineering for Explainable Systems (RE4ES)

    Welcome to the First International Workshop on Requirements Engineering for Explainable Systems (RE4ES), where we aim to advance requirements engineering (RE) for explainable systems, foster interdisciplinary exchange, and build a community. On the one hand, we believe that the methods and techniques of the RE community can add much value to explainability research. On the other hand, we have to ensure that we develop techniques suited to the needs of other communities. This first workshop explores synergies between the RE community and other communities already researching explainability. To this end, we have based our agenda on a mix of paper presentations from authors in different domains, one keynote from industry and one from academia, as well as interactive activities to stimulate lively discussions.